Národní úložiště šedé literatury (National Repository of Grey Literature): 2 records found.
Some Robust Estimation Tools for Multivariate Models
Kalina, Jan
Standard procedures of multivariate statistics and data mining are known to be vulnerable to the presence of outlying and/or highly influential observations. This paper proposes and investigates specific approaches for two situations. First, we consider clustering of categorical data. While the sensitivity of standard statistical and data mining methods for categorical data has received attention only recently, we aim to modify standard distance measures between clusters of such data. This allows us to propose a hierarchical agglomerative cluster analysis for two-way contingency tables with a large number of categories, based on a regularized measure of distance between two contingency tables. The proposal improves robustness to measurement errors in categorical data. As a second problem, we investigate a nonlinear version of least weighted squares regression for data with a continuous response. Our aim is an efficient algorithm for the least weighted squares estimator, formulated in a general way applicable to both linear and nonlinear regression. Our numerical study reveals the computational aspects of the algorithm and brings arguments in favor of its credibility.
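The least weighted squares (LWS) estimator mentioned above down-weights observations according to the ranks of their squared residuals, so that the most outlying points receive the smallest weights. The abstract does not give the algorithm itself; the following is a minimal iterative sketch of the general idea (alternating rank-based weight assignment with weighted least squares, in the spirit of C-step algorithms), not the authors' proposed method. The function name `lws_fit`, the convergence criterion, and the choice of weight vector are all illustrative assumptions.

```python
import numpy as np

def lws_fit(X, y, weights, n_iter=50):
    """Sketch of an iterative least weighted squares (LWS) fit.

    `weights` is a non-increasing vector of length n; the observation
    with the i-th smallest squared residual receives weights[i].
    This is a hypothetical illustration, not the paper's algorithm.
    """
    # start from an ordinary least squares fit
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r2 = (y - X @ beta) ** 2
        # rank of each squared residual (0 = smallest)
        ranks = np.argsort(np.argsort(r2))
        w = weights[ranks]  # small residual -> large weight
        # weighted least squares step: scale rows by sqrt(w)
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        if np.allclose(beta_new, beta):
            break
        beta = beta_new
    return beta
```

With linearly decreasing weights that trim the most extreme quarter of the residuals, the fit stays close to the true coefficients even when a tenth of the responses are grossly contaminated, which is the robustness property the abstract refers to.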
Robust Regularized Cluster Analysis for High-Dimensional Data
Kalina, Jan ; Vlčková, Katarína
This paper presents new approaches to hierarchical agglomerative cluster analysis for high-dimensional data. First, we propose a regularized version of hierarchical cluster analysis for categorical data with a large number of categories. It exploits regularized versions of various test statistics of homogeneity in contingency tables as the measure of distance between two clusters. Second, we turn to cluster analysis of continuous data with a large number of variables. Various regularization techniques tailor-made for high-dimensional data have been proposed, but they have turned out to suffer from high sensitivity to outlying measurements in the data. As a robust solution, we recommend combining two newly proposed methods, namely a regularized version of robust principal component analysis and a regularized Mahalanobis distance, which is based on an asymptotically optimal regularization of the covariance matrix. We present arguments in favor of the newly proposed methods.
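A regularized Mahalanobis distance is needed because the sample covariance matrix is singular (hence non-invertible) whenever the number of variables approaches or exceeds the number of observations. A common remedy is to shrink the covariance toward a well-conditioned diagonal target. The sketch below uses a fixed shrinkage intensity `lam` as a placeholder; the paper instead derives an asymptotically optimal intensity, which is not reproduced here.

```python
import numpy as np

def regularized_mahalanobis(X, lam=0.2):
    """Mahalanobis distances of the rows of X from the sample mean,
    using a covariance matrix shrunk toward a scaled identity target
    so it remains invertible when p >= n.

    `lam` in (0, 1] is a fixed, illustrative shrinkage intensity.
    """
    n, p = X.shape
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    # shrinkage target: identity scaled by the average variance
    target = np.eye(p) * np.trace(S) / p
    S_reg = (1.0 - lam) * S + lam * target
    diff = X - mu
    inv = np.linalg.inv(S_reg)
    # d_i = sqrt( diff_i^T  S_reg^{-1}  diff_i )
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, inv, diff))
```

Even with fewer observations than variables, where the plain sample covariance cannot be inverted, the regularized distance stays finite and still flags a shifted observation as the most distant one.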
